Teaching communication behaviour through dance and movement to children with Autism Spectrum Disorder (ASD) in Sarawak / Teo Jing Xin...[et al.]
This paper highlights the struggles of parents in Sarawak who have children with autism spectrum disorder (ASD), and introduces the authors’ proposed intervention programme for this population. Believing that parental buy-in to an intervention results in a higher level of follow-through, and therefore greater improvement for their children, the authors designed and developed the Fun, Inclusive, and Tolerant (FIT) dance- and movement-based behavioural intervention for Sarawakian children on the spectrum, with the specific objective of acknowledging and addressing parental cultural narratives, desires, and expectations while teaching appropriate behaviours to their children. The focus of this paper is two-fold: firstly, to demonstrate how dances in this programme were created, by explaining the formulation of an individual dance developed for a child with an expressive speech delay; and secondly, to present parental feedback regarding the programme. Concluding remarks touch upon the authors’ future directions in this research.
RetSeg: Retention-based Colorectal Polyps Segmentation Network
Vision Transformers (ViTs) have revolutionized medical imaging analysis,
showcasing superior efficacy compared to conventional Convolutional Neural
Networks (CNNs) in vital tasks such as polyp classification, detection, and
segmentation. Leveraging attention mechanisms to focus on specific image
regions, ViTs exhibit contextual awareness in processing visual data,
culminating in robust and precise predictions, even for intricate medical
images. Moreover, the inherent self-attention mechanism in Transformers
accommodates varying input sizes and resolutions, granting an unprecedented
flexibility absent in traditional CNNs. However, Transformers grapple with
challenges like excessive memory usage and limited training parallelism due to
self-attention, rendering them impractical for real-time disease detection on
resource-constrained devices. In this study, we address these hurdles by
investigating the integration of the recently introduced retention mechanism
into polyp segmentation, introducing RetSeg, an encoder-decoder network
featuring multi-head retention blocks. Drawing inspiration from Retentive
Networks (RetNet), RetSeg is designed to bridge the gap between precise polyp
segmentation and resource utilization, particularly tailored for colonoscopy
images. We train and validate RetSeg for polyp segmentation employing two
publicly available datasets: Kvasir-SEG and CVC-ClinicDB. Additionally, we
showcase RetSeg's promising performance across diverse public datasets,
including CVC-ColonDB, ETIS-LaribPolypDB, CVC-300, and BKAI-IGH NeoPolyp. While
our work represents an early-stage exploration, further in-depth studies are
imperative to advance these promising findings.
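The memory and parallelism trade-off the abstract describes comes from replacing softmax attention with retention, which admits an equivalent recurrent form with a constant-size state. Below is a minimal single-head sketch of the general RetNet formulation, not RetSeg's actual multi-head retention blocks; the decay `gamma` and the tensor shapes are illustrative assumptions:

```python
import numpy as np

def retention_recurrent(Q, K, V, gamma=0.9):
    """Single-head retention in its recurrent form:
    S_t = gamma * S_{t-1} + K_t^T V_t ;  out_t = Q_t S_t.
    State is a fixed (d x d_v) matrix per step, instead of an
    (n x n) attention map -- the source of the memory savings."""
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    out = np.zeros((n, V.shape[1]))
    for t in range(n):
        S = gamma * S + np.outer(K[t], V[t])
        out[t] = Q[t] @ S
    return out

def retention_parallel(Q, K, V, gamma=0.9):
    """Mathematically equivalent parallel form used for training:
    out = ((Q K^T) * D) V, with causal decay D[i,j] = gamma**(i-j)."""
    n = Q.shape[0]
    idx = np.arange(n)
    D = np.where(idx[:, None] >= idx[None, :],
                 gamma ** (idx[:, None] - idx[None, :]), 0.0)
    return ((Q @ K.T) * D) @ V
```

The two forms compute identical outputs; training uses the parallel form for throughput, while inference can use the recurrent form with O(1) memory in sequence length.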
cellSTORM - Cost-effective Super-Resolution on a Cellphone using dSTORM
Expensive scientific camera hardware is amongst the main cost factors in
modern, high-performance microscopes. Recent technological advantages have,
however, yielded consumer-grade camera devices that can provide surprisingly
good performance. The camera sensors of smartphones in particular have
benefited from this development. Combined with on-board computing power and
their ubiquity, smartphones provide a fantastic opportunity for "imaging on a
budget". Here we show that a consumer cellphone is even capable of optical
super-resolution imaging by (direct) Stochastic Optical Reconstruction
Microscopy (dSTORM), achieving optical resolution better than 80 nm. In
addition to the use of standard reconstruction algorithms, we investigated an
approach by a trained image-to-image generative adversarial network (GAN). This
not only serves as a versatile technique to reconstruct video sequences under
conditions where traditional algorithms provide sub-optimal localization
performance, but also allows processing directly on the smartphone. We believe
that "cellSTORM" paves the way for affordable super-resolution microscopy
suitable for research and education, expanding access to cutting-edge research
for a large community.
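For readers unfamiliar with the reconstruction step mentioned above: dSTORM builds the super-resolved image by localizing sparse single-molecule blinking events in each camera frame with sub-pixel precision. The sketch below is a deliberately naive stand-in for that localization step (intensity-weighted centroids), not the authors' pipeline, which uses established fitting algorithms and a GAN; the threshold and window size are assumptions:

```python
import numpy as np

def localize_emitters(frame, threshold, roi=3):
    """Naive single-molecule localization: find local maxima above
    `threshold`, then estimate each emitter's sub-pixel position by an
    intensity-weighted centroid over a (2*roi+1)^2 window.
    Production tools refine this with 2D Gaussian fitting."""
    coords = []
    h, w = frame.shape
    for y in range(roi, h - roi):
        for x in range(roi, w - roi):
            patch = frame[y - roi:y + roi + 1, x - roi:x + roi + 1]
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                ys, xs = np.mgrid[y - roi:y + roi + 1, x - roi:x + roi + 1]
                total = patch.sum()
                coords.append((float((ys * patch).sum() / total),
                               float((xs * patch).sum() / total)))
    return coords
```

Running this per frame and accumulating the localizations into a fine-pixel histogram yields the super-resolved image; the achievable resolution then depends on localization precision rather than the diffraction limit.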
Cancelable iris Biometrics based on data hiding schemes
Cancelable biometrics is a template-protection scheme that can replace a stolen or lost biometric template: instead of the original biometric template, it stores a modified version. In this paper, we propose a cancelable biometrics scheme for the iris based on a steganographic technique. We present a non-invertible transformation function that combines Huffman encoding and the Discrete Cosine Transform (DCT). This combination is commonly used in steganography to conceal a secret image in a cover image, and it is considered a powerful non-invertible transformation because the exact secret image cannot be extracted from the stego-image. The proposed non-invertible transformation function embeds the Huffman-encoded bit-stream of a secret image in the DCT coefficients of the iris texture to generate the transformed template. This novel method provides very high security, as the original iris template cannot be regenerated from the transformed (stego) iris template. We have also improved the segmentation and normalization process.
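The embedding idea can be illustrated with a toy sketch: hide a bit-stream in the parity of quantized DCT coefficients of a cover block, and recover it by re-applying the DCT. This is only our simplified stand-in for the paper's method; the Huffman stage and iris-texture specifics are omitted, and the coefficient indices and quantization step are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so that coef = C @ X @ C.T and
    X = C.T @ coef @ C invert each other exactly."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bits(cover, bits):
    """Hide `bits` in the parity of quantized DCT coefficients.
    `bits` stands in for the Huffman-encoded secret bit-stream."""
    n = cover.shape[0]
    C = dct_matrix(n)
    flat = (C @ cover @ C.T).flatten()
    for idx, b in enumerate(bits, start=1):  # skip the DC coefficient
        q = np.floor(flat[idx] / 2.0)
        flat[idx] = 2.0 * q + b              # force coefficient parity = bit
    return C.T @ flat.reshape(n, n) @ C      # stego image

def extract_bits(stego, nbits):
    """Recover the hidden bits by re-applying the DCT and reading parity."""
    n = stego.shape[0]
    C = dct_matrix(n)
    flat = (C @ stego @ C.T).flatten()
    return [int(round(flat[idx])) & 1 for idx in range(1, nbits + 1)]
```

Because the transform is orthonormal, the embedded coefficients survive the round trip to pixel space and back, while the secret itself cannot be reconstructed from the stego image without the encoding details.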
Investigation of ConViT on COVID-19 Lung Image Classification and the Effects of Image Resolution and Number of Attention Heads
COVID-19 has been one of the popular foci in the research community since its first outbreak in China in 2019. Radiological patterns such as ground-glass opacity (GGO) and consolidations are often found in CT scan images of moderate to severe COVID-19 patients. Therefore, a deep learning model can be trained to distinguish COVID-19 patients using their CT scan images. Convolutional Neural Networks (CNNs) have been a popular choice for this type of classification task. Another potential method is the use of a vision transformer with convolution, the Convolutional Vision Transformer (ConViT), to possibly produce on-par performance using fewer computational resources. In this study, ConViT is applied to diagnose COVID-19 cases from lung CT scan images. In particular, we investigated the input image pixel resolution and the number of attention heads used in ConViT, and their effects on the model’s performance. Specifically, we trained the model at 512x512, 224x224, and 128x128 pixel resolutions with 4 (tiny), 9 (small), and 16 (base) attention heads. An open-access dataset consisting of 2282 COVID-19 CT images and 9776 normal CT images from Iran is used in this study. Using 128x128 pixel resolution and 16 attention heads, the ConViT model achieved an accuracy of 98.01%, sensitivity of 90.83%, specificity of 99.69%, positive predictive value (PPV) of 95.58%, negative predictive value (NPV) of 97.89%, and F1-score of 94.55%. The model also achieved improved performance over other recent studies that used the same dataset. In conclusion, this study has shown that the ConViT model can play a meaningful role in complementing the RT-PCR test for COVID-19 close contacts and patients.
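All of the reported metrics derive from the binary confusion matrix. As a quick reference (the counts in the usage example are hypothetical, not the study's):

```python
def classification_metrics(tp, fn, fp, tn):
    """Binary-classification metrics from confusion-matrix counts:
    tp/fn = positives correctly/incorrectly classified,
    fp/tn = negatives incorrectly/correctly classified."""
    sensitivity = tp / (tp + fn)              # recall, true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}

# Hypothetical counts for illustration:
m = classification_metrics(tp=90, fn=10, fp=10, tn=90)
```

Note that with a class-imbalanced dataset such as this one (2282 positive vs. 9776 negative), accuracy alone is misleading, which is why sensitivity, PPV, and F1 are reported alongside it.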
Excited-State Dynamics in Borylated Arylisoquinoline Complexes in Solution and in cellulo
Two four-coordinate organoboron N,C-chelate complexes with different functional terminals on their PEG chains are studied with respect to their photophysical properties within human MCF-7 cells. Their excited-state properties are characterized by time-resolved pump-probe spectroscopy and fluorescence lifetime microscopy. The excited-state relaxation dynamics of the two complexes are similar when studied in DMSO. Aggregation of the complex with the carboxylate terminal group is observed in water. When the light-driven excited-state dynamics of both complexes are studied in cellulo, i.e., after uptake into human MCF-7 cells, the two complexes show different features depending on the nature of the anchoring PEG chains. For the complex with the amino group, the lifetime of a characteristic intramolecular charge-transfer state at 600 nm is significantly shorter in cellulo (360±170 ps) than in DMSO (∼960 ps). The kinetics of the complex with the carboxylate group, however, are in line with those recorded in DMSO. On the other hand, the lifetimes of the fluorescent state are almost identical for both complexes in cellulo. These findings underline the importance of evaluating the excited-state properties of fluorophores in a complex biological environment in order to fully account for the intra- and intermolecular effects governing light-induced processes in functional dyes.
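The quoted lifetimes come from fitting transient decay traces. As an illustrative sketch only (real pump-probe and FLIM analysis uses multi-exponential models with instrument-response deconvolution, not this simplification), a single-exponential lifetime can be estimated from a noiseless decay by a log-linear fit:

```python
import numpy as np

def fit_lifetime(t, signal):
    """Estimate the lifetime tau of a mono-exponential transient
    I(t) = A * exp(-t / tau) via least squares on log(I):
    log I = log A - t / tau, so tau = -1 / slope."""
    slope, _ = np.polyfit(t, np.log(signal), 1)
    return -1.0 / slope

# Synthetic transient with tau = 0.96 ns (illustrative value):
t = np.linspace(0.0, 3.0, 50)          # delay times in ns
signal = 2.0 * np.exp(-t / 0.96)       # amplitude A = 2.0 (arbitrary units)
tau = fit_lifetime(t, signal)
```

On real, noisy data the log transform over-weights the low-signal tail, which is one reason dedicated fitting software performs the regression in linear space with proper weighting.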